
Script Tools

Tools can be made available to the agent so that it can perform more complex tasks. Tools are exposed to the language model, which decides whether a given tool needs to be executed at any point in the conversation and fills in its parameter values, if any, using information from the prompt and the conversation messages.

There are three different types of tools on the platform: Scripts, MCP, and Searches. In this section, we will explain how to configure Script tools.

Script Tools Configuration

Basic Setup

  • Scripts must be written in Python.
  • Name: This name will be sent to the language model.
  • Description: Explain clearly when the tool should be used; the model relies on this description to decide whether or not to execute it.
  • Parameters must be defined with:
    • Name: This name will be sent to the language model.
    • Description: Explain clearly what the parameter represents so the model can fill it with the right value.
    • Type (string, integer, etc.)
    • Whether it is required or not.
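Putting the pieces together, a minimal tool configured with a single required parameter might look like the sketch below. The parameter name `city` is invented for illustration; on the platform the `params` dictionary is injected automatically, so the stub assignment exists only to make this sketch self-contained.

```python
# Hypothetical parameter defined in the tool setup:
#   Name: "city", Type: string, Required: True
# `params` is injected by the platform at runtime; stubbed here for illustration.
params = {"city": "Lisbon"}

# The model fills params["city"] from the user's message
response = f"Looking up information for {params['city']}"
```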

Available Variables

  • params: Dictionary containing defined parameters in the setup of the tool
  • chat object with properties:
    • id
    • title
    • assistant
    • like
    • comment
    • ended
    • messages
    • metadata
    • execution_status
  • messages object with properties:
    • id
    • role
    • content
    • msg_num
    • like
    • metadata
    • attachments
  • secrets: Dictionary containing defined secrets on platform configuration.
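As an illustration of how these injected objects can be combined, here is a hypothetical sketch. The `Chat` stand-in class and the `apikey` secret name are assumptions made only so the example is self-contained; on the platform, `chat` and `secrets` are provided automatically.

```python
# `chat` and `secrets` are injected by the platform; the stubs below exist
# only so this sketch runs on its own.
class Chat:  # stand-in for the platform's chat object
    id = 101
    title = "Support conversation"

chat = Chat()
secrets = {"apikey": "sk-..."}  # hypothetical secret defined in platform configuration

def build_context():
    import json  # imports belong inside the function body (see the sandbox note below)
    return json.dumps({
        "chat_id": chat.id,
        "chat_title": chat.title,
        "has_key": "apikey" in secrets,
    })

response = build_context()
```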

Take into account

There are forbidden imports and builtins; the complete lists are below.

forbidden_imports = "os", "sys", "importlib", "subprocess", "pickle", "eval", "exec", "shutil", "ctypes", "pathlib", "http", "assistant", "builtins", "Cognitive"

forbidden_builtins = "open", "file", "eval", "exec", "compile", "globals", "locals", "vars", "input", "print", "reload", "help", "build_class"

Important note about imports and function definitions

When writing Script Tools, all code is executed within a restricted, isolated environment. Because of the way this sandbox executes scripts, top-level imports (outside functions) may not work correctly when the script also defines functions (def). If your script defines one or more functions, place the imports inside the function body to ensure they are correctly loaded at runtime.

Working example:

def convert():
    import json
    person = {"name": "Lucy", "age": 30, "profession": "Engineer"}
    return json.dumps(person)

response = convert()

This version will not work:

import json

def convert():
    person = {"name": "Lucy", "age": 30, "profession": "Engineer"}
    return json.dumps(person)

response = convert()

Response Handling

The script must assign a value to the response variable, which becomes the content of the tool's message. When a tool is run from another script, it returns a dictionary with "response" and "logs" keys; the tool's message is stored under "response".
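The contract is simply that the script ends with a value in `response`; a minimal script therefore needs only an assignment:

```python
# The script's only obligation: assign the final text to `response`
result = 2 + 3
response = f"The sum is {result}"
```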

Example of tool calling inside a tool

# Call nested script
Script = apps.get_model("assistant", "Script")
formatter_tool = Script.objects.get(name="format_number")
result = formatter_tool.run(chat, {"number": total})

response = result["response"]

Talk to Assistants

Script tools can create and manage chats programmatically using the platform's chat service functions. This enables powerful use cases like orchestrating multi-agent workflows, delegating tasks to specialized assistants, or processing multiple queries in parallel.

Available Functions

  • create_chat(): Creates a new chat with an assistant
  • add_message(): Adds a message to an existing chat
  • create_chat_parallel(): Creates multiple chats in parallel for improved performance

Importing Chat Functions

Note that assistant is in the forbidden imports list, but you can still import specific submodules like assistant.services.chat inside your function body:

from assistant.services.chat import create_chat, add_message, create_chat_parallel
# Your code here

Function Reference

create_chat(assistant_id, message=None, message_metadata=None, chat_metadata=None, files_and_metadata=None, stream=False, is_async=False)

Creates a new chat with an assistant and optionally sends an initial message.

Parameters:

  • assistant_id (int, required): The ID of the Assistant instance to create the chat with
  • message (str, optional): Initial message text to send
  • message_metadata (dict, optional): Metadata to attach to the message
  • chat_metadata (dict, optional): Metadata to attach to the chat
  • files_and_metadata (list, optional): List of (file, metadata_dict) tuples
  • stream (bool, optional): Whether to stream the response (default: False)
  • is_async (bool, optional): Whether to process asynchronously (default: False)

Returns: Response object with chat data

add_message(chat_id, message, message_metadata=None, files_and_metadata=None, stream=False, is_async=False)

Adds a message to an existing chat and generates a response.

Parameters:

  • chat_id (int, required): The ID of the Chat instance to add the message to
  • message (str, required): The message text
  • message_metadata (dict, optional): Metadata to attach to the message
  • files_and_metadata (list, optional): List of (file, metadata_dict) tuples
  • stream (bool, optional): Whether to stream the response (default: False)
  • is_async (bool, optional): Whether to process asynchronously (default: False)

Returns: Response object with the assistant's reply

create_chat_parallel(chat_data_list, max_threads=None, timeout=None)

Creates multiple chats in parallel for improved performance.

Parameters:

  • chat_data_list (list, required): List of dictionaries with chat data. Each dictionary should contain:
    • assistant (int, required): Assistant ID
    • message (str, optional): Message text
    • message_metadata (dict, optional): Message metadata
    • chat_metadata (dict, optional): Chat metadata
  • max_threads (int, optional): Maximum number of threads to use for parallel execution
  • timeout (int, optional): Timeout in seconds for the entire operation

Returns: List of results in the same order as input. Each result is either a Response object (success) or a dict with error message (failure)

Note: Does not support stream or is_async parameters

Important Considerations

  • All functions perform automatic validation and will raise ValueError or TypeError for invalid inputs
  • Messages are validated against the assistant's max_msg_length setting
  • For add_message(), the chat must be in AVAILABLE or RUNNING status
  • Always use try-except blocks for error handling
  • When using is_async=True, the function returns immediately with {"status": "processing", "chat_id": <id>} instead of waiting for the response
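The considerations above can be sketched as follows. Since `create_chat` is only importable on the platform, it is stubbed here; the stub mirrors the documented {"status": "processing", "chat_id": <id>} shape of the asynchronous return value, which is the only behaviour assumed.

```python
# Stub of create_chat for illustration only. On the platform, import it with:
#   from assistant.services.chat import create_chat
def create_chat(assistant_id, message=None, is_async=False, **kwargs):
    if is_async:
        # Documented fire-and-forget return shape
        return {"status": "processing", "chat_id": 42}
    return {"id": 42, "messages": []}

result = create_chat(assistant_id=1, message="Summarize this", is_async=True)

if isinstance(result, dict) and result.get("status") == "processing":
    # Async mode: the chat is still being processed; only the chat_id is known
    response = f"Chat {result['chat_id']} is processing asynchronously"
else:
    response = "Chat completed synchronously"
```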

Usage Examples

Creating a single chat:

from assistant.services.chat import create_chat

log.append(f"Delegating query to assistant {params['specialist_assistant_id']}")

try:
    response_obj = create_chat(
        assistant_id=params["specialist_assistant_id"],
        message=params["query"],
        chat_metadata={"delegated_from": chat.id}
    )

    chat_data = response_obj.data
    messages = chat_data.get("messages")

    if len(messages) > 2:
        # Get assistant response
        response = messages[2].get("content")
    else:
        response = "Error creating chat"

except (ValueError, TypeError) as e:
    response = f"Error: {str(e)}"
    log.append(f"Exception occurred: {str(e)}")

Multi-turn conversation:

from assistant.services.chat import create_chat, add_message

log.append("Creating initial chat")

# Create initial chat
new_chat = create_chat(
    assistant_id=1,
    message="What's the weather today?",
    chat_metadata={"script_tool_id": self.id}
)

chat_id = new_chat.data.get("id") if hasattr(new_chat, 'data') else None

# Add follow-up message
if chat_id:
    log.append(f"Adding follow-up message to chat {chat_id}")
    follow_up = add_message(
        chat_id=chat_id,
        message="And what about tomorrow?",
        message_metadata={"follow_up": True}
    )
    messages = follow_up.data.get("messages")

    if len(messages) > 2:
        # Get last assistant response
        response = messages[-1].get("content")
    else:
        response = "Error generating response"
else:
    log.append("Failed to get chat ID")
    response = "Error creating chat"

Parallel chat processing:

from assistant.services.chat import create_chat_parallel

queries = params.get("queries", [])
assistants = params.get("assistants", [])

if len(queries) != len(assistants):
    response = "Input Error: The number of queries must match the number of assistants"
else:
    log.append(f"Processing {len(queries)} queries in parallel")

    chat_data = [
        {
            "assistant": assistants[i],
            "message": queries[i],
            "chat_metadata": {"batch_id": f"batch_{chat.id}"}
        }
        for i in range(len(queries))
    ]

    results = create_chat_parallel(
        chat_data_list=chat_data,
        max_threads=3,
        timeout=60
    )

    responses = []
    for i, result in enumerate(results):
        if isinstance(result, dict) and "error" in result:
            log.append(f"Query {i} failed: {result['error']}")
            responses.append(f"Query {i} failed: {result['error']}")
        else:
            log.append(f"Query {i} completed successfully")
            responses.append(f"Query {i} completed")

    response = "\n".join(responses)

Script Tools Examples

Example of tool without parameters

This tool retrieves the current date and adds context to the system prompt by loading the value into the dictionary data["prompt_params"]["context"].

  • Name: Current date
  • Description: Execute to obtain the current date
from datetime import datetime

# Get the current date
current_date = datetime.now()

# Format the date in the desired format
formatted_date = current_date.strftime('%A %d de %B de %Y')

# Store the date, with the first letter of the day capitalized,
# in the prompt context and in the response
response = data["prompt_params"]["context"]["Current Date"] = formatted_date.capitalize()

Example of scripts using params

  • Name: Forbidden_keyword
  • Description: Execute if the user mentions the keyword.
correct_keyword = "correct_keyword"

if params["keyword"] == correct_keyword:
    response = "The keyword is correct"
else:
    response = params["keyword"] + " is not the correct keyword"

  • Parameters:
    • Name: "keyword"
    • Description: "The keyword mentioned by the user in their interaction."
    • Type: string
    • Required: True

Example of scripts using secrets

This tool calls OpenAI directly and returns the translation to the main orchestrator model.

  • Name: Translate text
  • Description: Execute to translate text to Spanish.
from openai import OpenAI

client = OpenAI(api_key=secrets["apikey"])

completion = client.chat.completions.create(
    model="gpt-4.1",
    temperature=0,
    messages=[
        {"role": "system", "content": "Translate this text to Spanish"},
        {"role": "user", "content": params["text"]},
    ],
)
response = completion.choices[0].message.content

  • Parameters:
    • Name: "text"
    • Description: "The text to translate"
    • Type: string
    • Required: True
  • Secrets:
    • apikey

Script Logging

Script tools include a built-in logging system that allows you to track execution details and debug complex workflows. Logs are automatically collected and shared across nested script calls.

Using the Log Object

The log object is automatically available in your script and provides a simple interface to record execution details:

log.append("Your message here")

Each log entry automatically includes:

  • level: The name of the script tool that generated the log
  • timestamp: When the log was created (ISO 8601 format)
  • message: Your custom message
  • thread_id: The thread identifier (useful for parallel execution)

Nested Script Logging

When a script tool calls another script tool, logs from all nested calls are automatically accumulated in a single collection. Each script logs under its own name as the level, making it easy to trace execution flow through multiple tools.

Example: Simple Nested Call

Main Script (calculate_summary):

log.append("Starting calculation")

# Get some data
total = sum([1, 2, 3, 4, 5])
log.append(f"Calculated total: {total}")

# Call nested script
Script = apps.get_model("assistant", "Script")
formatter_tool = Script.objects.get(name="format_number")
result = formatter_tool.run(chat, {"number": total})

log.append("Calculation complete")

response = result["response"]

Nested Script (format_number):

log.append("Formatting number")

number = params.get("number")
formatted = f"${number:,.2f}"

log.append(f"Formatted as: {formatted}")

response = formatted

Resulting Logs:

[
    {
        "level": "calculate_summary",
        "timestamp": "2025-11-06T10:30:00.123456",
        "message": "Starting calculation",
        "thread_id": 12345
    },
    {
        "level": "calculate_summary",
        "timestamp": "2025-11-06T10:30:00.234567",
        "message": "Calculated total: 15",
        "thread_id": 12345
    },
    {
        "level": "format_number",
        "timestamp": "2025-11-06T10:30:00.345678",
        "message": "Formatting number",
        "thread_id": 12345
    },
    {
        "level": "format_number",
        "timestamp": "2025-11-06T10:30:00.456789",
        "message": "Formatted as: $15.00",
        "thread_id": 12345
    },
    {
        "level": "calculate_summary",
        "timestamp": "2025-11-06T10:30:00.567890",
        "message": "Calculation complete",
        "thread_id": 12345
    }
]

Parallel Execution with Threading

The logging system is thread-safe and automatically tracks which thread created each log entry. This is particularly useful for scripts that process multiple items in parallel:

import concurrent.futures

log.append("Starting notification service")

def send_notification(user_id):
    """Send notification to one user."""
    import time  # imports used inside functions go in the function body

    log.append(f"Sending to user {user_id}")

    # Simulate API call
    time.sleep(1)

    log.append(f"Sent to user {user_id}")
    return f"User {user_id} notified"

users = params.get("users", ["Alice", "Bob", "Charlie"])
log.append(f"Will notify {len(users)} users")

# Run in parallel with 3 threads
with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
    results = list(executor.map(send_notification, users))

log.append(f"All done: {len(results)} notifications sent")

response = "\n".join(results)

Resulting Logs (notice different thread_id values):

[
    {
        "level": "send_notifications",
        "timestamp": "2025-11-07T10:00:00.000000",
        "message": "Starting notification service",
        "thread_id": 140234567890000
    },
    {
        "level": "send_notifications",
        "timestamp": "2025-11-07T10:00:00.100000",
        "message": "Will notify 3 users",
        "thread_id": 140234567890000
    },
    {
        "level": "send_notifications",
        "timestamp": "2025-11-07T10:00:00.200000",
        "message": "Sending to user Alice",
        "thread_id": 140234567891111
    },
    {
        "level": "send_notifications",
        "timestamp": "2025-11-07T10:00:00.201000",
        "message": "Sending to user Bob",
        "thread_id": 140234567892222
    },
    {
        "level": "send_notifications",
        "timestamp": "2025-11-07T10:00:00.202000",
        "message": "Sending to user Charlie",
        "thread_id": 140234567893333
    },
    {
        "level": "send_notifications",
        "timestamp": "2025-11-07T10:00:01.205000",
        "message": "Sent to user Alice",
        "thread_id": 140234567891111
    },
    {
        "level": "send_notifications",
        "timestamp": "2025-11-07T10:00:01.206000",
        "message": "Sent to user Bob",
        "thread_id": 140234567892222
    },
    {
        "level": "send_notifications",
        "timestamp": "2025-11-07T10:00:01.207000",
        "message": "Sent to user Charlie",
        "thread_id": 140234567893333
    },
    {
        "level": "send_notifications",
        "timestamp": "2025-11-07T10:00:01.300000",
        "message": "All done: 3 notifications sent",
        "thread_id": 140234567890000
    }
]